Improving Asymptotic Variance of MCMC Estimators: Non-reversible Chains are Better
Author
Abstract
I show how any reversible Markov chain on a finite state space that is irreducible, and hence suitable for estimating expectations with respect to its invariant distribution, can be used to construct a non-reversible Markov chain on a related state space that can also be used to estimate these expectations, with asymptotic variance at least as small as that using the reversible chain (typically smaller). The non-reversible chain achieves this improvement by avoiding (to the extent possible) transitions that backtrack to the state from which the chain just came. The proof that this modification cannot increase the asymptotic variance of an MCMC estimator uses a new technique that can also be used to prove Peskun’s (1973) theorem that modifying a reversible chain to reduce the probability of staying in the same state cannot increase asymptotic variance. A non-reversible chain that avoids backtracking will often take little or no more computation time per transition than the original reversible chain, and can sometimes produce a large reduction in asymptotic variance, though for other chains the improvement is slight. In addition to being of some practical interest, this construction demonstrates that non-reversible chains have a fundamental advantage over reversible chains for MCMC estimation. Research into better MCMC methods may therefore best be focused on non-reversible chains.
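As a rough illustration of the backtracking idea (a toy sketch in my own notation, not the paper's general construction; `reversible_walk` and `non_backtracking_walk` are hypothetical names): on a cycle of N states, the reversible chain picks either neighbour with probability 1/2, while a variant that remembers the previous state and refuses to return to it keeps moving in one direction. Both leave the uniform distribution invariant, but in this extreme case avoiding backtracking removes the diffusive behaviour entirely and the variance of the estimator collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                      # states 0..N-1 arranged on a cycle; invariant distribution is uniform
f = lambda x: np.cos(2 * np.pi * x / N)     # test function with E_pi[f] = 0

def reversible_walk(steps):
    """Reversible chain: move to either neighbour with probability 1/2."""
    x, samples = 0, np.empty(steps)
    for t in range(steps):
        x = (x + rng.choice([-1, 1])) % N
        samples[t] = f(x)
    return samples

def non_backtracking_walk(steps):
    """Non-reversible chain on (state, previous state): never undo the last move.
    On the cycle this means always continuing in the same direction; the
    uniform distribution on states remains invariant."""
    x, prev = 0, N - 1
    samples = np.empty(steps)
    for t in range(steps):
        x, prev = (2 * x - prev) % N, x     # the unique neighbour that is not `prev`
        samples[t] = f(x)
    return samples

# Compare the variance of the two MCMC estimates of E_pi[f] over replications.
reps, steps = 100, 2000
est_rev = [reversible_walk(steps).mean() for _ in range(reps)]
est_nb  = [non_backtracking_walk(steps).mean() for _ in range(reps)]
print("reversible estimator variance:      ", np.var(est_rev))
print("non-backtracking estimator variance:", np.var(est_nb))
```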
Similar Articles
Improving the Asymptotic Performance of Markov Chain Monte-Carlo by Inserting Vortices
We present a new way of converting a reversible finite Markov chain into a nonreversible one, with a theoretical guarantee that the asymptotic variance of the MCMC estimator based on the non-reversible chain is reduced. The method is applicable to any reversible chain whose states are not connected through a tree, and can be interpreted graphically as inserting vortices into the state transitio...
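One way to picture a vortex numerically (my own reading of the graphical description above, not necessarily the authors' exact construction): start from a reversible chain with uniform invariant distribution and add a small skew-symmetric circulation around a cycle of states. The perturbation has zero row and column sums, so the result is still a stochastic matrix with the same invariant distribution, but detailed balance no longer holds.

```python
import numpy as np

n, eps = 5, 0.05

# Reversible base chain with uniform invariant distribution:
# a lazy random walk on the complete graph (symmetric, hence doubly stochastic).
P = np.full((n, n), 1.0 / (2 * (n - 1)))
np.fill_diagonal(P, 0.5)

# "Vortex" on the cycle 0 -> 1 -> 2 -> 0: add eps along the cycle and subtract
# eps on the reversed edges.  The perturbation G has zero row and column sums,
# so P + G is still stochastic and keeps the uniform distribution invariant,
# but it is no longer reversible.
G = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 0)]:
    G[i, j] += eps
    G[j, i] -= eps
P_vortex = P + G

pi = np.full(n, 1.0 / n)
F = pi[:, None] * P_vortex                  # probability flow pi_i * P_ij
print("rows sum to one:        ", np.allclose(P_vortex.sum(axis=1), 1.0))
print("uniform still invariant:", np.allclose(pi @ P_vortex, pi))
print("detailed balance broken:", not np.allclose(F, F.T))
```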
Covariance Ordering for Discrete and Continuous Time Markov Chains
The covariance ordering, for discrete and continuous time Markov chains, is defined and studied. This partial ordering gives a necessary and sufficient condition for MCMC estimators to have small asymptotic variance. Connections between this ordering, eigenvalues, and suprema of the spectrum of the Markov transition kernel, are provided. A representation of the asymptotic variance of MCMC estim...
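For orientation, the asymptotic variance that such orderings compare is the standard one (restated here in my own notation from the usual definitions, not quoted from this particular paper): for a stationary chain with kernel P, invariant distribution π and f ∈ L²(π),

\[
v(f,P) \;=\; \lim_{n\to\infty} n \,\mathrm{Var}\!\Big(\tfrac{1}{n}\textstyle\sum_{t=1}^{n} f(X_t)\Big)
\;=\; \mathrm{Var}_\pi(f) \;+\; 2\sum_{k=1}^{\infty} \mathrm{Cov}_\pi\big(f(X_0), f(X_k)\big),
\]

whenever the limit exists; one kernel dominates another in an efficiency ordering of this kind when its v(f,P) is no larger for every such f.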
Notes on Using Control Variates for Estimation with Reversible MCMC Samplers
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC s...
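A generic sketch of the control-variate adjustment for simulation output (the simple least-squares coefficient below is the textbook choice, not the optimal MCMC-specific coefficients derived in this work; `control_variate_estimate` is a hypothetical name): given evaluations of f along the trajectory and a control function g whose mean under the target is known to be zero, subtract the fitted multiple of g from the plain average.

```python
import numpy as np

def control_variate_estimate(f_vals, g_vals):
    """Adjusted estimate of E[f] from simulation output, using a control
    variate g whose mean under the target is known to be zero.

    theta is the plain least-squares coefficient Cov(f, g) / Var(g); the
    adjusted estimator is mean(f) - theta * mean(g)."""
    f_vals, g_vals = np.asarray(f_vals), np.asarray(g_vals)
    theta = np.cov(f_vals, g_vals)[0, 1] / np.var(g_vals, ddof=1)
    return f_vals.mean() - theta * g_vals.mean()

# Toy usage with i.i.d. draws (output of an MCMC sampler would be used the
# same way): estimate E[exp(X)] for X ~ N(0, 1), with g(X) = X as the control.
rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
print("plain average:   ", np.exp(x).mean())
print("with control var:", control_variate_estimate(np.exp(x), x))
```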
Nonasymptotic bounds on the estimation error of MCMC algorithms
We address the problem of upper bounding the mean square error of MCMC estimators. Our analysis is non-asymptotic. We first establish a general result valid for essentially all ergodic Markov chains encountered in Bayesian computation and a possibly unbounded target function f. The bound is sharp in the sense that the leading term is exactly σ²as(P, f)/n, where σ²as(P, f) is the CLT asymptotic...
Nonasymptotic bounds on the estimation error for regenerative MCMC algorithms
MCMC methods are used in Bayesian statistics not only to sample from posterior distributions but also to estimate expectations. Underlying functions are most often defined on a continuous state space and can be unbounded. We consider a regenerative setting and Monte Carlo estimators based on i.i.d. blocks of a Markov chain trajectory. The main result is an inequality for the mean square error. ...
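A toy discrete-state sketch of the regenerative idea (my own example; the paper itself works with general, possibly continuous state spaces and unbounded functions): successive tours that start at a fixed atom state and end just before the next return to it are i.i.d., so the expectation is estimated by the ratio of total block sums to total block lengths.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 3-state chain (rows sum to 1); state 0 serves as the regeneration atom.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
f = np.array([0.0, 1.0, 2.0])               # estimate E_pi[f]

def regenerative_estimate(n_tours, atom=0):
    """Run complete tours that start at the atom and end just before the next
    return to it; E_pi[f] is estimated by (sum of block sums) / (sum of block lengths)."""
    block_sums, block_lens = [], []
    for _ in range(n_tours):
        x, s, length = atom, 0.0, 0
        while True:
            s += f[x]
            length += 1
            x = rng.choice(3, p=P[x])
            if x == atom:                   # the chain regenerates: tour ends here
                break
        block_sums.append(s)
        block_lens.append(length)
    return np.sum(block_sums) / np.sum(block_lens)

print(regenerative_estimate(5_000))
```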
Journal:
Volume/Issue:
Pages: -
Publication date: 2004